In [13]:
%pylab notebook
from math import factorial as fact
The sine function can be approximated with the following series: $$sin(x) \approx x - \dfrac{x^3}{3!} + \dfrac{x^5}{5!} - \dfrac{x^7}{7!} + ...$$
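Written as a sum, the N-term partial sum of this series is $$sin(x) \approx \sum_{n=0}^{N-1} \frac{(-1)^n\, x^{2n+1}}{(2n+1)!},$$ which is the form the sinabout helper in the code below evaluates; N = 4 gives the 7th order polynomial used in part (a).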
(a) Draw in the same graph the sine function and its 7th order polynomial approximation in the range $x ∈ [−π, π]$.
(b) Where (x-value) in the given range does the 7th order polynomial approximation differ the most? How large is the true error at that point?
(c) What is the largest relative approximation error (in absolute value) between the 5th order and 7th order polynomials within the given range?
(d) How many terms do we need in the approximation so that the value of $sin(x)$ is correct to at least 4 significant figures everywhere in the given range?
In [14]:
def plot_sin():
    x = linspace(-pi, pi, 1000)
    y = sin(x)
    plot(x, y)
    show()

def sinabout(x, N):
    # N-term Taylor polynomial of sin(x): x - x^3/3! + x^5/5! - ...
    y = x
    for n in range(1, N):
        sign = (-1)**n
        y = y + sign*x**(2*n + 1)/fact(2*n + 1)
    return y

def problem01(letter):
    x = linspace(-pi, pi, 1000)
    if letter == "a":
        # Plot the sine graph
        plot_sin()
        # Plot the approximation with 4 terms (7th order polynomial)
        y = sinabout(x, 4)
        plot(x, y)
        grid()
    elif letter == "b":
        y1 = sin(x)
        y2 = sinabout(x, 4)
        # Plot the difference between y1 and y2
        plot(x, y1 - y2)
        grid()
        show()
        # Calculate the true error
        error = max(abs(y1 - y2))
        print("True error is: {}".format(error))
    elif letter == "c":
        y1 = sinabout(x, 3)
        y2 = sinabout(x, 4)
        plot(x, y1)
        plot(x, y2)
        grid()
        show()
        return y1, y2, x
    elif letter == "d":
        # Define the required precision
        prec = 0.0001
        y1 = sin(x)
        for j in range(1, 10):
            y2 = sinabout(x, j)
            err = max(abs(y1 - y2))
            if err < prec:
                break
        # Plot graphs
        plot(x, y1)
        plot(x, y2)
        grid()
        show()
        # Return the term count and the error
        return j, err
    else:
        print("Unable to parse problem!")
In [15]:
figure("Problem 1.a")
problem01("a")
In [16]:
figure("Problem 1.b")
problem01("b")
We can analyze the graph to see that the approximation differs the most at x = π or x = −π. The true error there is 0.07522 (or −0.07522, depending on the sign).
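A quick numeric check of this reading (a sketch, assuming the sinabout helper defined above is in scope):
In [ ]:
x = linspace(-pi, pi, 1000)
diff = abs(sin(x) - sinabout(x, 4))
i = argmax(diff)
print("Largest difference at x = {:.4f}, true error = {:.5f}".format(x[i], diff[i]))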
In [17]:
figure("Problem 1.c")
y1, y2, x = problem01("c")
In [18]:
figure("Problem 1.c - Approximation error")
# Calculate approximation error
err = (y2 - y1)/y2
# Plot graphs
plot(x, err*100)
ylabel('Relative appr error (%)')
xlabel('x-axis')
# Show graphs
grid()
show()
The largest relative approximation error is roughly 800000 %. The ratio blows up near the edge of the range, where the 7th order polynomial crosses zero while the 5th order one does not.
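For reference, the location and size of the spike can be read off numerically (a sketch, assuming err and x from the cell above):
In [ ]:
i = argmax(abs(err))
print("Largest relative error of {:.3g} % at x = {:.4f}".format(abs(err[i])*100, x[i]))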
In [19]:
figure("Problem1.d")
i, err = problem01("d")
print("It took {} terms to reach 4 digit accuracy. The error value with 4 digits is {}.".format(i, err))
In [20]:
figure("Problem 02")
# Solve the 2x2 system A*x = b and plot the two solution components as points
A = array(([2, -6], [-1, 8]))
b = array(([-18, 40]))
x1, x2 = dot(inv(A), b)
plot(x1, "ro")
plot(x2, "bo")
grid()
show()
By analyzing the graph we can determine that $x_1 = 9.6$ and $x_2 = 6.2$.
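An alternative, purely graphical reading of the same system: plot both equations as lines $x_2(x_1)$ and look at their intersection (a sketch using the coefficients from the cell above; the variable names are only for illustration):
In [ ]:
figure("Problem 02 - lines")
xs = linspace(0, 15, 100)
# 2*x1 - 6*x2 = -18  ->  x2 = (2*x1 + 18)/6
plot(xs, (2*xs + 18)/6, label="2x1 - 6x2 = -18")
# -x1 + 8*x2 = 40  ->  x2 = (x1 + 40)/8
plot(xs, (xs + 40)/8, label="-x1 + 8x2 = 40")
legend()
grid()
show()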
Study graphically (do not use any algorithms) the following function $$f(x) = 10.0e^{-x}sin(2.0\pi x)+ 0.5x$$ within the range $0 ≤ x ≤ 10$.
What is (a) the maximum, (b) the minimum, and (c) the largest root of the function within the given range. Give the answers with four significant figures.
In [21]:
figure("Problem 03")
x = linspace(0, 10, 100000)
y = 10.0*exp(-x)*sin(2*pi*x) + 0.5*x
plot(x, y)
ylim(-10, 10)
axhline(y=0.0, color="r")  # reference line at y = 0 (xmin/xmax are axes fractions, so the defaults are kept)
grid()
Maximum: x = 0.2264 y = 7.9999
Minimum: x = 0.7223 y = -4.4218
Largest root: x = 1.8920
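These values can also be read off the sampled arrays directly (a quick check on the plotted data, assuming x and y from the cell above):
In [ ]:
imax, imin = argmax(y), argmin(y)
print("Maximum: x = {:.4f}, y = {:.4f}".format(x[imax], y[imax]))
print("Minimum: x = {:.4f}, y = {:.4f}".format(x[imin], y[imin]))
# Largest root: the last sign change of y on the grid
crossings = where(y[:-1]*y[1:] < 0)[0]
print("Largest root: x = {:.4f}".format(x[crossings[-1]]))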
The following code implements the forward elimination part of the Gaussian elimination algorithm. Complete the code with the backward substitution part, i.e. implement the following equations $$x_i = \frac{(b_i - \sum_{j = i+1}^{n} a_{ij}x_j)}{a_{ii}}$$ and test and verify your algorithm by solving a 3 × 3 linear equation system of your choice.
In [22]:
def GaussianElimination(a, b):
    a = a.copy().astype(float)
    b = b.copy().astype(float)
    n = len(a)
    for k in range(n):
        # Check for a zero pivot
        if a[k, k] == 0:
            print('Unable to solve')
            return None
        # Forward elimination
        for i in range(k+1, n):
            factor = a[i, k]/a[k, k]
            for j in range(k+1, n):
                a[i, j] = a[i, j] - factor*a[k, j]
            b[i] = b[i] - factor*b[k]
    # Backward substitution
    # Do the calculations in place, i.e. the solution x overwrites b
    for i in range(n-1, -1, -1):
        for j in range(i+1, n):
            b[i] = b[i] - a[i, j]*b[j]
        b[i] = b[i]/a[i, i]
    return b
In [23]:
A = array(([5.0, -1.0, 82.0], [-41.8, 22.0, 9000.01], [1, 0.0, -1.0]))
b = array(([0.0, -800.98, 172.5]))
v1 = GaussianElimination(A, b)
v2 = dot(inv(A), b)
print("Results of function: {}\nResults of dot-product: {}".format(v1, v2))
In [24]:
def gaussSeidel(A, b):
    # Relaxation factor (SOR)
    omega = 1.1
    # Maximum number of iterations
    p = 1000
    # Define tolerance
    tol = 1.0e-9
    n = len(b)
    x = np.zeros(n)
    # Generate the starting vector from the diagonal
    for y in range(n):
        x[y] = b[y]/A[y, y]
    # Iterate at most p times
    for k in range(p):
        xOld = x.copy()
        for i in range(n):
            s = 0
            for j in range(n):
                if j != i:
                    s = s + A[i, j]*x[j]
            x[i] = omega/A[i, i]*(b[i] - s) + (1 - omega)*x[i]
        # Break execution if we are within the required tolerance
        dx = sqrt(np.dot(x - xOld, x - xOld))
        if dx < tol:
            return x
    return x

A = array(([-2, 5, 9],
           [7, 1, 1],
           [-7, 7, -1]))
b = array(([1, 6, -26]))
# Pivot the rows so that the large coefficients end up on the diagonal
p = [1, 2, 0]
A = A[p, :]
b = b[p]
v1 = gaussSeidel(A, b)
# Compare the result with the direct solution
v2 = dot(inv(A), b)
print(v1)
print(v2)
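The rows are pivoted so that the largest coefficients land on the diagonal, which Gauss-Seidel relies on for convergence; with the original ordering the diagonal entries (-2, 1, -1) are small compared to the off-diagonal ones. A quick look at the diagonal after pivoting (a sketch, assuming A from the cell above):
In [ ]:
d = abs(diag(A))
off = abs(A).sum(axis=1) - d
print("Diagonal magnitudes:   {}".format(d))
print("Off-diagonal row sums: {}".format(off))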
The equilibrium equations of the blocks in the spring-block system are $$ 3(x_2 - x_1) - 2x_1 = -80 $$ $$ 3(x_3 - x_2) - 3(x_2 - x_1) = 0 $$ $$ 3(x_4 - x_3) - 3(x_3 - x_2) = 0 $$ $$ 3(x_5 - x_4) - 3(x_4 - x_3) = 60 $$ $$ -2x_5 - 3(x_5 - x_4) = 0 $$ where $x_i$ are the horizontal displacements of the blocks measured in mm. Solve these equations using any functions of your choice found in the scipy.linalg package. Show the steps and verify your solution.
Before solving the problem, we convert the equations into matrix form.
$$\begin{bmatrix} -5.0 & 3.0 & 0.0 & 0.0 & 0.0 \\ 3.0 & -6.0 & 3.0 & 0.0 & 0.0 \\ 0.0 & 3.0 & -6.0 & 3.0 & 0.0 \\ 0.0 & 0.0 & 3.0 & -6.0 & 3.0 \\ 0.0 & 0.0 & 0.0 & 3.0 & -5.0 \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ \end{bmatrix} = \begin{bmatrix} -80 \\ 0 \\ 0 \\ 60 \\ 0 \\ \end{bmatrix}$$
In [25]:
A = array(([-5.0, 3.0, 0.0, 0.0, 0.0],
[3.0, -6.0, 3.0, 0.0, 0.0],
[0.0, 3.0, -6.0, 3.0, 0.0],
[0.0, 0.0, 3.0, -6.0, 3.0],
[0.0, 0.0, 0.0, 3.0, -5.0]))
b = array(([-80.0, 0.0, 0.0, 60.0, 0.0]))
# Use the solve function from scipy.linalg
from scipy.linalg import solve
x = solve(A, b)
print(x)
Now that we have the values for $x_i$, we can substitute them back into the original equations to verify the solution.
In [26]:
y = zeros(len(x))
y[0] = 3 * (x[1] - x[0]) - (2 * x[0])
y[1] = 3 * (x[2] - x[1]) - (3 * (x[1] - x[0]))
y[2] = 3 * (x[3] - x[2]) - (3 * (x[2] - x[1]))
y[3] = 3 * (x[4] - x[3]) - (3 * (x[3] - x[2]))
y[4] = -2 * x[4] - (3 * (x[4] - x[3]))
# Print the results, rounded to 10 decimals
print("The result of y = {}".format(around(y, decimals=10)))
As we can see, the results of y match the right-hand-side values given in the original equations (to at least 10 decimal places), so we can safely say the solution is correct.
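The same verification can be done in a single line by checking the residual directly (a sketch, assuming A, b, and x from the cells above):
In [ ]:
print(allclose(dot(A, x), b))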